Tufts Security and Privacy Lab investigates user experience
As the world becomes increasingly digitized, cybersecurity is more important than ever. Tufts University is a leader in this growing field, with faculty members like Lin Family Assistant Professor Daniel Votipka of the Department of Computer Science working to ensure the safety and success of digital systems. Votipka is the co-director of the Tufts Security and Privacy Lab (TSP), which focuses on how people build, operate, use, and defend systems. Several recent TSP papers investigate current privacy protections or information verification protocols and evaluate their effectiveness from a human factors perspective.
Evaluating user perceptions of social media content
One paper from the TSP lab, presented at the Symposium on Usable Privacy and Security (SOUPS) 2024, focused on social media users. It explored how changes to social media verification policies have affected users’ perceptions of posts from verified accounts. Despite the growing difference in verification criteria between the platforms X/Twitter and Facebook, the researchers found the differences had little effect on users’ perceptions of post credibility. However, when informed about the verification policy, users perceived posts from verified accounts as more credible when accounts were verified only if they were notable and had provided proof of identity via a government-issued ID. Interestingly, while users were annoyed by platforms that required payment for verification (i.e., X/Twitter and Facebook), this did not change their perception of verified accounts. The project brought together researchers from several universities, including first author and recent alum Nickolas Gravel, EG23, undergraduate student Christopher Pellegrini, E24, and Votipka from Tufts, as well as co-authors Callahan Family Professor Micah Sherr of Georgetown University and Associate Professor Michelle L. Mazurek of the University of Maryland.
Determining best practices through interviews
Other recent papers from the TSP lab developed sets of recommendations based on the lived experience and preferences of interviewees in particular focus areas.
One group turned their attention to the experiences of cybersecurity professionals. With only 10% of professionals identifying as women, Latine, and/or Black, the field lacks racial and gender diversity. PhD student Samantha Katcher, recent undergraduate alum Liana Wang, E22, Caroline Chin, E24, and Votipka worked with Chloé Messdaghi, CEO of SustainCyber, Associate Professor Michelle Mazurek of the University of Maryland, Associate Professor Marshini Chetty of the University of Chicago, and Assistant Professor Kelsey Fulton of the Colorado School of Mines. The research team interviewed and surveyed groups of cybersecurity professionals about their experiences in the field, finding that women were more likely to report harassment and that all groups reported a low level of psychological safety. Based on these insights, the researchers assembled a set of recommendations for cybersecurity community leaders to encourage more equitable participation among all groups. The resulting paper was also presented at SOUPS.
Another group focused on the Family Educational Rights and Privacy Act (FERPA), which has protected the privacy of student educational records since 1974. First author Sarah Radway, a PhD student at Harvard University, undergraduate students Katherine Quintanilla, E25, and Cordelia Ludden, E26, and Votipka reviewed how current FERPA information sharing practices have evolved with the growth of technology. Although FERPA was created to protect student privacy, the team found that many universities shared FERPA student directory data, including personally identifiable information, with third-party data brokers upon request. Based on interviews with students across several universities, the researchers provided universities with concrete recommendations for managing FERPA student directory data in line with student preferences and cybersecurity best practices. The resulting paper was presented at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems (CHI) in May.
A fourth paper, presented at the USENIX Security Symposium, investigated threat modeling, a practice in which security analysts work through hypothetical attack scenarios to make sure a product or piece of software is secure. Threat modeling identifies potential weaknesses so that they can be addressed before they cause significant security issues. First author and PhD student Ronald Thompson and PhD student Carson Powers worked with recent computer science alum Madeline McLaughlin, A23, and Votipka to assess the effectiveness of current threat modeling methods for medical devices. As automation becomes a key component of many threat modeling processes, the researchers evaluated how well emerging automated approaches fit with the current practices of security experts. Using insights from interviews with medical device security experts, the researchers aimed to align emerging threat modeling tools with experts’ established workflows for improved security.
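To make the practice concrete, many threat modeling approaches (STRIDE is a widely used example) walk a data-flow model of a system and ask which categories of threat apply to each element. The Python sketch below is a minimal, hypothetical illustration of that idea; the device model, element names, and simplified STRIDE mapping are illustrative assumptions, not the tooling or methodology from the paper.

```python
# Illustrative sketch only: a toy STRIDE-style threat enumeration over a
# simplified data-flow model of a hypothetical medical device. The element
# names and the per-type STRIDE mapping are assumptions for illustration;
# real threat modeling tools use much richer rule sets.
from dataclasses import dataclass

STRIDE_BY_TYPE = {
    "process":    ["Spoofing", "Tampering", "Repudiation",
                   "Information disclosure", "Denial of service",
                   "Elevation of privilege"],
    "data_store": ["Tampering", "Information disclosure",
                   "Denial of service"],
    "data_flow":  ["Tampering", "Information disclosure",
                   "Denial of service"],
    "external":   ["Spoofing", "Repudiation"],
}

@dataclass
class Element:
    name: str
    kind: str  # one of the keys in STRIDE_BY_TYPE

def enumerate_threats(model):
    """Yield (element name, threat category) pairs for each model element."""
    for element in model:
        for category in STRIDE_BY_TYPE.get(element.kind, []):
            yield element.name, category

# A toy data-flow model of an infusion pump (hypothetical element names).
model = [
    Element("Pump controller firmware", "process"),
    Element("Dosage log", "data_store"),
    Element("Wireless link to hospital network", "data_flow"),
    Element("Clinician", "external"),
]

for name, threat in enumerate_threats(model):
    print(f"{name}: consider {threat}")
```

Each printed pair is a prompt for an analyst to judge whether that threat category is plausible for that element and how it should be mitigated; automating this enumeration step is one place where the processes the researchers studied bring automation into the workflow.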
Improving cybersecurity human factors
In the Tufts Security and Privacy Lab and across the university, Tufts researchers remain on the leading edge of cybersecurity. By studying how users currently work with and perceive technology, researchers uncover effective ways to implement cybersecurity best practices. Their projects encompass a range of users, including cybersecurity and threat modeling professionals, everyday social media users, and large institutions such as universities. Taking a human-centered approach ensures that recommendations are realistic to implement and fit the ways users already interact with the technology.
Learn more about Lin Family Assistant Professor Daniel Votipka and the Tufts Security and Privacy Lab.